23 APR 2013 by ideonexus

 Intelligence Arises out of a Need to Maximize Entropy

The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests. [...] "It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence]." ...

The more entropy, the more possibilities. Intelligence therefore seeks to maximize its "future histories," keeping as many possibilities open as it can.
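For reference, the mechanism in Wissner-Gross and Freer's underlying paper on causal entropic forces is a force proportional to the gradient of the entropy of future paths. Roughly, and with the details simplified:

$$ \mathbf{F}(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \Big|_{\mathbf{X}_0}, \qquad S_c(\mathbf{X}, \tau) = -k_B \int \Pr\big(\mathbf{x}(t) \mid \mathbf{x}(0)\big) \ln \Pr\big(\mathbf{x}(t) \mid \mathbf{x}(0)\big)\, \mathcal{D}\mathbf{x}(t), $$

where $S_c$ is the entropy over all feasible paths $\mathbf{x}(t)$ of duration $\tau$ starting from the present state $\mathbf{X}_0$, and $T_c$ sets how strongly the system is pushed toward states with more open futures.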

23 APR 2013 by ideonexus

 Entropy as a Purpose in Life

To the best of our knowledge, these tool use puzzle and social cooperation puzzle results represent the first successful completion of such standard animal cognition tests using only a simple physical process. The remarkable spontaneous emergence of these sophisticated behaviors from such a simple physical process suggests that causal entropic forces might be used as the basis for a general—and potentially universal—thermodynamic model for adaptive behavior. Namely, adaptive behavior might ...

Researchers have created an AI that directs itself into activities like balancing a ball on a stick or buying stocks low and selling high, simply by programming it to maximize the "future histories" available to it (i.e., if the ball drops or the AI runs out of money, the possible futures in the simulation collapse to one). This suggests a thermodynamic relationship between intelligence and disorder: we might seek to maximize the entropy in our own lives by staying alive, making money, getting educated, or otherwise increasing the number of future histories available to us.
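To make the "keep your futures open" objective concrete, here is a minimal toy sketch. It is not Entropica's actual algorithm (which estimates causal path entropy by sampling continuous dynamics); the line-world and the names `step`, `reachable_states`, and `choose_action` are hypothetical illustrations. The agent simply picks whichever action leaves the largest number of distinct reachable states within a short horizon, and that alone steers it away from dead ends:

```python
import random

# Toy world: an agent on a line of positions 0..10; stepping off either end
# is a dead end (like dropping the ball or running out of money).
ACTIONS = [-1, +1]

def step(pos, move):
    new = pos + move
    return new if 0 <= new <= 10 else None  # None = no futures left

def reachable_states(state, horizon):
    """Count the distinct states reachable within `horizon` moves."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        nxt = set()
        for s in frontier:
            for a in ACTIONS:
                s2 = step(s, a)
                if s2 is not None and s2 not in seen:
                    nxt.add(s2)
        seen |= nxt
        frontier = nxt
    return len(seen)

def choose_action(state, horizon=5):
    """Greedy 'maximize future histories' policy: pick whichever action
    leaves the largest number of distinct reachable futures."""
    scored = []
    for a in ACTIONS:
        s2 = step(state, a)
        score = -1 if s2 is None else reachable_states(s2, horizon)
        scored.append((score, a))
    best = max(score for score, _ in scored)
    return random.choice([a for score, a in scored if score == best])

if __name__ == "__main__":
    pos = 1
    for _ in range(20):
        pos += choose_action(pos)
    print(pos)  # the agent drifts toward the middle, where the most futures stay open
```

Run from a position near either edge, the agent walks toward the center of the line and hovers there, since the middle is where the most futures stay open; the same greedy rule, applied to a cart-pole or a trading balance, gives the flavor of behavior described above.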